Use MonitorUpdatingPersister #456
src/io/test_utils.rs
Outdated
```rust
let persister_0 = MonitorUpdatingPersister::new(
	store_0,
	&chanmon_cfgs[0].logger,
	persister_0_max_pending_updates,
	&chanmon_cfgs[0].keys_manager,
	&chanmon_cfgs[0].keys_manager,
	&chanmon_cfgs[0].tx_broadcaster,
	&chanmon_cfgs[0].fee_estimator,
);

let persister_1 = MonitorUpdatingPersister::new(
	store_1,
	&chanmon_cfgs[1].logger,
	persister_1_max_pending_updates,
	&chanmon_cfgs[1].keys_manager,
	&chanmon_cfgs[1].keys_manager,
	&chanmon_cfgs[1].tx_broadcaster,
	&chanmon_cfgs[1].fee_estimator,
);
```
Given the duplication, consider extracting this into a helper function.
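A sketch of what such a helper might look like, assuming the LDK test-utils types visible in the snippet above (`create_persister` is a hypothetical name, and the exact generic bounds may differ across LDK versions):

```rust
use core::ops::Deref;

use lightning::ln::functional_test_utils::TestChanMonCfg;
use lightning::util::persist::{KVStore, MonitorUpdatingPersister};
use lightning::util::test_utils::{
	TestBroadcaster, TestFeeEstimator, TestKeysInterface, TestLogger,
};

// Hypothetical helper: builds a MonitorUpdatingPersister from a single
// node's test config, so both call sites above collapse into one-liners.
fn create_persister<'a, K: Deref>(
	store: K, cfg: &'a TestChanMonCfg, max_pending_updates: u64,
) -> MonitorUpdatingPersister<
	K, &'a TestLogger, &'a TestKeysInterface, &'a TestKeysInterface,
	&'a TestBroadcaster, &'a TestFeeEstimator,
>
where
	K::Target: KVStore,
{
	MonitorUpdatingPersister::new(
		store,
		&cfg.logger,
		max_pending_updates,
		&cfg.keys_manager, // entropy source
		&cfg.keys_manager, // signer provider
		&cfg.tx_broadcaster,
		&cfg.fee_estimator,
	)
}
```

With that in place, each call site reduces to something like `let persister_0 = create_persister(store_0, &chanmon_cfgs[0], persister_0_max_pending_updates);`.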
src/builder.rs
Outdated
```rust
let persister = Arc::new(Persister::new(
	Arc::clone(&kv_store),
	Arc::clone(&logger),
	10, // (?)
```
I also don't know what value to recommend here but, pending @tnull's take, you could define a constant to hold this value.
My guess is as good as any, as we don't have super good data on this parameter so far. I think we could bump it to 100 to start with and then see if we run into any issue with it. I agree that we should introduce a const for it though.
FWIW I agree with you @tnull.
Our node ran on 100 for a long time and is 1000 now, but you would probably want 100 for most people. I dunno your design philosophy about what to make configurable, but it's probably a small number of power users who would want something other than 100.
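A sketch of such a constant, using the 100 suggested above (the name and doc comment are illustrative, not from the PR):

```rust
/// The maximum number of pending `ChannelMonitorUpdate`s persisted before
/// the full `ChannelMonitor` is re-persisted. (Hypothetical name; the
/// value of 100 follows the discussion above.)
const DEFAULT_MAXIMUM_PENDING_UPDATES: u64 = 100;
```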
enigbe left a comment
Hi @arturgontijo, thank you for submitting this PR. I have done an initial review and left two minor comments regarding the hard-coding of values and using helper functions to reduce duplicated logic. From my POV this looks good, and I am happy to take another look once the comments are addressed.
tnull left a comment
Cool, thank you very much for looking into this! Excuse the delay, I have to admit it fell off my radar for a few days.
The changes already look pretty good to me, I just have a few minor nits / comments.
Given that this patch, while rather small, affects really the core of the node's functionality, I'm considering holding off landing this for a few days until after the upcoming 0.5 release. This would give us more time to spot any funky behavior that could arise from this, especially since there have been considerable changes made to the MonitorUpdatingPersister on LDK main recently that will ship with LDK v0.2, and further changes to the persistence/storage layer in LDK Node are planned for v0.6.
TLDR: while the patch is pretty simple, I'll go ahead and tag this PR for milestone 0.6, hopefully landing soon after v0.5 shipped.
```rust
use lightning::events::ClosureReason;
use lightning::util::test_utils;
use lightning::{check_added_monitors, check_closed_broadcast, check_closed_event};
```
nit: Not quite sure why the import was changed, but if we already do so, why not move the others up, too?
src/io/test_utils.rs
Outdated
```rust
let monitor_name = MonitorName::from(mon.get_funding_txo().0);
assert_eq!(
	store_0
```
nit: Would prefer if we could move this to a variable to avoid the additional indentation and verticality in the assert_eq (same below).
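A minimal, self-contained illustration of the nit (the data and names here are made up, since the diff above is truncated):

```rust
fn main() {
	let store_0 = vec!["monitor_a", "monitor_b"];

	// Before: the nested expression forces extra indentation and
	// verticality inside the assert_eq!.
	assert_eq!(
		store_0
			.iter()
			.filter(|name| name.starts_with("monitor"))
			.count(),
		2
	);

	// After: bind the value to a variable first, keep the assertion flat.
	let monitor_count = store_0.iter().filter(|n| n.starts_with("monitor")).count();
	assert_eq!(monitor_count, 2);
}
```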
src/io/test_utils.rs
Outdated
```rust
	sender = 0;
	receiver = 1;
}
send_payment(&nodes[sender], &vec![&nodes[receiver]][..], 21_000);
```
You can avoid the unnecessary allocations here and below by simply dropping `vec!`, given that you're making it a slice anyways.
```diff
-send_payment(&nodes[sender], &vec![&nodes[receiver]][..], 21_000);
+send_payment(&nodes[sender], &[&nodes[receiver]][..], 21_000);
```
also, consider moving these to appropriately named variables for clarity.
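Combining both suggestions might look like this (variable names are hypothetical, not from the PR):

```rust
// Sketch only: bind the endpoints to named variables, pass a plain slice.
let sender_node = &nodes[sender];
let receiver_node = &nodes[receiver];
send_payment(sender_node, &[receiver_node][..], 21_000);
```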
Hey @enigbe and @tnull, thanks for the initial review. Regarding `MonitorUpdatingPersister`: I'll address all the suggestions, and I also agree that it can bring side effects that we are not anticipating yet.

@tnull, I would prefer if we prioritized this over the big refactor in 0.6 and ldk-0.2, mainly because it would be best if […]. However, one drawback of the current approach in the PR is that "LDK Node moves to `MonitorUpdatingPersister`" without user request/input. We might want to consider that bit carefully, or at least make it optional, since once the node's persistence moves to `MonitorUpdatingPersister`, […].
Well, yes, I intend to land this ~first thing after 0.5 shipped. So it will be prioritized for 0.6, but we will (need to) add the refactors in that release.
That's a good point, and btw one of the reasons why I decided to hold off on this PR for 0.5: I first want to take some time to think through (and test) some of the consequences, especially to figure out if there are relevant use cases that would prefer the old style, and if we should support them.
@arturgontijo We've now shipped 0.5, so we could consider moving forward with this PR soon. It also needs a rebase by now.
Cool, let me resolve that conflict and then we can discuss next steps.
# Conflicts:
#	src/io/test_utils.rs
I can give some input as a user: after about 20k channel monitor updates, payments start to slow down substantially. This issue was noticed a few years ago on our routing node, and using this […]
Thanks for the feedback! Btw, you might be interested in the related work going on over at […]
Why was this closed?
This PR got blocked for some reason (that I can't recall now) and I decided to move on to other stuff. Feel free to pick things up here and open a new PR.
This PR aims to solve #441, making use of `MonitorUpdatingPersister` in the `ChainMonitor`.

Note: Not sure about a proper value for its `maximum_pending_updates` (set it to `10` for now).
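For context, a rough sketch of the wiring this describes, assuming the Arc-wrapped components seen in the src/builder.rs snippet above (all names are illustrative; this is not the PR's actual code):

```rust
// Hypothetical builder wiring: the MonitorUpdatingPersister wraps the
// node's KVStore and is handed to the ChainMonitor as its persister.
let persister = Arc::new(MonitorUpdatingPersister::new(
	Arc::clone(&kv_store),
	Arc::clone(&logger),
	10, // `maximum_pending_updates`: set to 10 in this PR, pending tuning
	Arc::clone(&keys_manager), // entropy source
	Arc::clone(&keys_manager), // signer provider
	Arc::clone(&tx_broadcaster),
	Arc::clone(&fee_estimator),
));

let chain_monitor = Arc::new(chainmonitor::ChainMonitor::new(
	Some(Arc::clone(&chain_source)),
	Arc::clone(&tx_broadcaster),
	Arc::clone(&logger),
	Arc::clone(&fee_estimator),
	Arc::clone(&persister),
));
```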